Role: Data Engineer with Azure
Location: Pittsburgh, PA
Hire Mode: Full-time - Hybrid
Design, construct, and maintain scalable data management systems using Azure Databricks, ensuring they meet end-user
expectations. Supervise the upkeep of existing data infrastructure workflows to ensure continuous service delivery. Create
data processing pipelines utilizing Databricks Notebooks, Spark SQL, Python, and other Databricks tools (a minimal sketch of
such a pipeline follows the list below). Oversee and lead the module through planning, estimation, implementation,
monitoring, and tracking.
· Ability to work independently and multi-task effectively.
· Configure system settings and options and execute unit/integration testing.
· Develop end-user Release Notes, training materials and deliver training to a broad user base.
· Identify and communicate areas for improvement. Demonstrate high attention to detail, the ability to work in a dynamic
environment while maintaining high quality standards, a natural aptitude for developing good internal working
relationships, and a flexible work ethic.
· Responsible for quality checks and adhering to the agreed Service Level Agreement (SLA) / Turnaround Time (TAT).
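
Below is a minimal, illustrative sketch of the kind of Databricks pipeline described above, using PySpark and Spark SQL to read from a hypothetical Azure Data Lake Storage path and write a curated Delta table. All paths, column names, and table names are assumptions for illustration only, not requirements of the role.

# Minimal sketch of a Databricks-style pipeline; names, paths, and schemas
# below are illustrative assumptions, not part of this posting.
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession already exists as `spark`; getOrCreate() keeps
# this sketch runnable outside a notebook as well.
spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Ingest raw data from a (hypothetical) Azure Data Lake Storage path.
raw = spark.read.format("json").load("abfss://raw@examplelake.dfs.core.windows.net/orders/")

# Transform with PySpark / Spark SQL: basic cleansing plus an aggregate.
orders = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
)
orders.createOrReplaceTempView("orders")
daily_revenue = spark.sql("""
    SELECT order_date, SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date
""")

# Persist curated output as a Delta table for downstream consumers.
daily_revenue.write.format("delta").mode("overwrite").saveAsTable("curated.daily_revenue")
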
Over 8 years of experience in data engineering, with expertise in Azure Databricks, MSSQL, Lakeflow, Python, and
supporting Azure technologies.
· Design, build, test, and maintain highly scalable data management systems using Azure Databricks.
· Create data processing pipelines utilizing Databricks Notebooks and Spark SQL.
· Integrate Azure Databricks with other Azure services such as Azure Data Lake Storage and Azure SQL Data Warehouse.
· Design and implement robust ETL pipelines using Databricks, ensuring data quality and integrity.
· Design and implement effective data models, schemas and data governance using the Databricks environment.
· Develop and optimize PySpark/Python code for data processing tasks.
· Assist stakeholders with data-related technical issues and support their data infrastructure needs.
· Develop and maintain documentation for data pipeline architecture, development processes, and data governance.
· Data Warehousing: In-depth knowledge of data warehousing concepts, architecture, and implementation, including
experience with various data warehouse platforms.
· Data Quality: implement data quality rules using Databricks and external platforms like IDQ (an illustrative sketch of
such checks appears after this list).
· Extremely strong organizational and analytical skills with keen attention to detail.
· Strong track record of excellent results delivered to internal and external clients.
· Excellent problem-solving skills, with the ability to work independently or as part of a team.
· Strong communication and interpersonal skills, with the ability to effectively engage with both technical and
non-technical stakeholders.
· Able to work independently without the need for close supervision, and collaboratively as part of cross-team
efforts.
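
The sketch below illustrates the kind of data quality checks referenced in the list above, applied inside a Databricks ETL step with plain PySpark filters. The table name, column names, and rules are assumptions for illustration; a dedicated platform such as IDQ could enforce similar rules.

# Hedged sketch of simple data quality rules inside a Databricks ETL step;
# the table, columns, and thresholds are illustrative assumptions only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

df = spark.table("curated.daily_revenue")  # hypothetical curated table

# Rule 1: no null business keys.
null_keys = df.filter(F.col("order_date").isNull()).count()

# Rule 2: no negative revenue values.
negative_rows = df.filter(F.col("revenue") < 0).count()

failures = {"null_order_date": null_keys, "negative_revenue": negative_rows}
failed = {rule: count for rule, count in failures.items() if count > 0}

if failed:
    # Fail the job so the orchestrator (e.g. a Databricks job) can alert or retry.
    raise ValueError(f"Data quality checks failed: {failed}")
print("All data quality checks passed.")
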